| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| adv-optm | 1.1.4 | A family of highly efficient, lightweight, yet powerful optimizers. | 2025-11-05 02:54:48 |
| rapidfireai | 0.12.3 | RapidFire AI: rapid experimentation for easier, faster, and more impactful AI customization. Built for agentic RAG, context engineering, fine-tuning, and post-training of LLMs and other DL models. | 2025-11-04 06:42:41 |
| strands-mlx | 0.2.6 | Use MLX in Strands Agents. | 2025-11-03 05:09:10 |
| frontier | 0.1.5 | Python SDK for Frontier. | 2025-10-30 06:41:59 |
| hulu-evaluate | 0.0.5 | Client library to fine-tune and evaluate models on the HuLU benchmark. | 2025-10-29 09:44:46 |
| hugme | 0.0.1 | Library to evaluate models on the HuGME benchmark. | 2025-10-29 09:31:14 |
| bidora | 0.1.2 | BiDoRA/LoRA fine-tuning toolkit for 3D code generation and spatial intelligence. | 2025-10-27 23:34:48 |
| rewardsignal | 0.2.0 | Python SDK for Reward Signal, a training API for language models. | 2025-10-26 03:19:19 |
| optimum-neuron | 0.4.1 | Optimum Neuron is the bridge between Hugging Face libraries (Transformers, Diffusers, PEFT) and AWS Trainium and Inferentia accelerators. It provides tools for easy model loading, training, and inference on single- and multi-Neuron-core configurations across a wide range of downstream tasks. | 2025-10-23 15:53:22 |
| optimum-habana | 1.19.1 | Optimum Habana is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides tools for easy model loading, training, and inference in single- and multi-HPU settings for different downstream tasks. | 2025-10-21 16:38:20 |
| finetuning-scheduler | 2.9.0 | A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules. | 2025-10-20 22:13:31 |
| rebel-forge | 0.10.13 | Config-driven QLoRA/LoRA fine-tuning toolkit for Rebel Forge. | 2025-10-11 03:46:52 |
| seanox-ai-nlp | 1.3.0.1 | Lightweight NLP components for semantic processing of domain-specific content. | 2025-10-09 11:17:39 |
| data-prep-toolkit | 1.0.3 | Data Preparation Toolkit library for Ray and Python. | 2025-10-02 14:47:47 |
| data-prep-toolkit-transforms | 1.1.5 | Data Preparation Toolkit transforms using Ray. | 2025-10-02 14:47:15 |
| llmbuilder | 1.0.2 | Complete LLM training and deployment pipeline with a CLI. | 2025-09-02 11:26:19 |
| adv-lion | 0.1.1 | PyTorch Lion optimizer with updated and advanced features. | 2025-09-01 16:22:17 |
| MLorc-optim | 0.1.8 | Unofficial implementation of Momentum Low-Rank Compression (MLorc) for memory-efficient LLM fine-tuning. | 2025-08-31 15:43:40 |
| smartloop | 1.3.2 | Smartloop command-line interface for processing documents with an LLM. | 2025-08-24 17:55:11 |
| medllm-finetune-rag | 0.1.2 | A comprehensive toolkit for fine-tuning medical large language models with RAG capabilities. | 2025-08-19 01:46:12 |
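
Of the entries above, finetuning-scheduler has a particularly small entry point: it plugs into PyTorch Lightning as a callback. A minimal sketch, assuming Lightning 2.x; the model and datamodule it would train are hypothetical and not shown:

```python
# Minimal sketch of finetuning-scheduler's documented callback usage.
# With no explicit schedule, the callback generates and applies a default
# fine-tuning schedule that unfreezes the model's layers incrementally.
from lightning.pytorch import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler()])
# trainer.fit(model, datamodule=dm)  # any LightningModule / LightningDataModule
```

A custom schedule can also be supplied to the callback (via its `ft_schedule` argument, pointing at a YAML file of per-phase parameter groups) rather than relying on the generated default.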
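
Similarly, optimum-habana mirrors the familiar Transformers Trainer API on Gaudi hardware. A minimal sketch, assuming an HPU-equipped host; the model checkpoint and Gaudi config name are illustrative choices, not prescriptions:

```python
# Minimal sketch: Gaudi drop-in replacements for Trainer/TrainingArguments.
from transformers import AutoModelForSequenceClassification
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,                               # run on HPU
    use_lazy_mode=True,                            # HPU lazy-mode execution
    gaudi_config_name="Habana/bert-base-uncased",  # Hub-hosted Gaudi config
)
trainer = GaudiTrainer(model=model, args=args)
# trainer.train()  # requires a train_dataset; omitted in this sketch
```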